New & upvoted


Posts tagged community

Quick takes

KMF
Hi all! A heads up that it is that time again: Magnify mentee applications are open :) More below. Thank you so much :)

___

"Magnify Mentoring applications are now open for women, non-binary, and trans people of all genders who are looking to pursue high-impact careers. You can apply here. Applications will close on 10 July 2024.

Past mentees have been particularly successful when they have a sense of what they would like to achieve through mentorship. The matching process normally takes us 4-6 weeks. We look to match pairings based on the needs and availability of the mentee and mentor, their goals, career paths, and what skills they are looking to develop.

On average, mentees and mentors meet once a month for 60-90 minutes, with a series of optional prompt questions prepared by our team. In the post-round feedback form, the average rating for "I recommend being a Magnify mentee" was 9.28/10 in Round 3 and 9.4/10 in Round 4. You can see testimonies from some of our mentees here, here, and here. Some reported outcomes for mentees were:

1. Advice, guidance, and resources on achieving goals.
2. Connection and support in pursuing opportunities (jobs, funding).
3. Confidence-building.
4. Specific guidance (How to network? How to write a good resume?).
5. Joining a welcoming community for support through challenges."
If anyone who disagrees with me on the Manifest stuff considers themselves inside the EA movement, I'd like to have some discussions with a focus on consensus-building, i.e. we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.

Edited: @Joseph Lemien asked for positions I hold:

* The EA Forum should not seek to have opinions on non-EA events. I don't mean individual EAs shouldn't have opinions; I mean that as a group we shouldn't seek to judge individual events. I don't think we're very good at it.
* I don't like Hanania's behaviour either, and am a little wary of systems where norm-breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the Manifold community internally.
* CEA is welcome to invite or disinvite whomever it likes to EAGs. Maybe one day I'll complain. But do I want EAGs to invite a load of Manifest's edgiest speakers? Not particularly.
* It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that's up to them.
* I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me that the couples who stay together are either great at settling disputes or almost never fight. We fight a bit and we aren't great at settling it. I guess I'd like us to fight less (say we aren't interested in conflicty posts) or to get better at making up (come to consensus afterwards, grow and change).
* Only 1-6% of attendees at Manifest raised issues along eugenicsy lines in the feedback forms. I don't think this is worth a huge change.
* I would imagine it's worth tens of millions of dollars to avoid EA becoming a space full of people who fearmonger based on the races, genders, or sexualities of others. I don't think that's very likely.
* To me, current systems for taxing discussion of eugenics seem fine. There is the odd post that gets downvoted. If it were good and convincing it would be upvoted; so far it hasn't been. Seems fine. I am not scared of bad arguments.[1]
* Black people are probably not avoiding Manifest because of these speakers, because that theory doesn't seem to hold up for tech, rationalism,[2] EA, or several other communities.
* I don't know what people want when they point at "distancing EA from rationalism".
* Manifest was fun for me, and it and several other events I went to in the Bay felt like I let out a breath that I never knew I was holding. I am pretty careful what I say about you all sometimes, and it's tiring. I guess that's true for some of you too. It was nice (and surprisingly un-edgy for me) to be in a space where I didn't have to worry about offending people a lot. I enjoy having spaces where I feel safer.
* There is a tradeoff between feeling safe and expression. I would have more time for some proposals if people acknowledged the costs they are putting on others. Even small costs, even costs I would willingly pay, are still costs, and to have that be unmentionable feels gaslighty.
* There are some incentives in this community to be upset about things and to be blunt in response. Both of these things seem bad. I'd prefer incentives towards working together to figure out how the world is and implement the most effective, morally agreeable changes per unit resource. This requires some truthseeking, but probably not the maximal amount, and some kindness, but probably not the maximal amount.

1. ^ Unless there was some kind of flooding of the forum to boost posts repeatedly.
2. ^ LessWrong doesn't have any significant discussion of eugenics either. As I (weakly) understand it, they kicked off many posters who wanted to talk about such things.
Why are seitan products so expensive? I strongly believe in the price, taste, convenience hypothesis: if/when non-animal foods are cheaper and tastier, I expect the West to undergo a moral cascade where factory farming, in a very short timespan, goes from being commonplace to illegal. I know that in the animal welfare space this viewpoint is often considered naive, but I remain convinced it's true. My mother buys the expensive vegan mayonnaise because it's much tastier than the regular mayonnaise. I still eat dairy and eggs because the vegan alternatives suck.

What I don't understand is why vegan alternatives have proven so difficult to make cheap and tasty. Are there any good write-ups on this? When I go to a supermarket in Copenhagen, every seitan product carries a significant markup over the raw cost of the ingredients (Amazon will sell you kilos of seitan flour at very little cost). Do consumers have sufficiently inelastic preferences that a small-market, high-markup strategy is the most profitable one? Is the final market too small for producers to reach economies of scale for seitan, or is it just difficult to bootstrap? I would love to better understand what the demand curves look like for various categories of vegan products, as I really can't wrap my mind around how the current equilibrium came about.
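On the inelastic-preferences question, a minimal sketch (my own numbers and function names, not the poster's) of the standard monopoly-markup logic: under constant-elasticity demand, the Lerner rule (p - c)/p = 1/e ties the profit-maximizing markup to the demand elasticity e, so inelastic demand alone can sustain prices far above ingredient cost.

```python
# Toy illustration of why inelastic demand can make "small market, high
# markup" the profit-maximizing strategy: for a seller facing constant-
# elasticity demand q = A * p^(-e) with marginal cost c, the optimal
# price satisfies the Lerner rule (p - c)/p = 1/e.

def optimal_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximizing price for constant demand elasticity e > 1."""
    return marginal_cost * elasticity / (elasticity - 1)

cost = 2.0  # hypothetical per-unit ingredient cost of a seitan product
for e in (1.25, 2.0, 4.0):
    print(f"elasticity {e}: optimal price {optimal_price(cost, e):.2f}")
# elasticity 1.25 -> 10.00 (400% markup); elasticity 4.0 -> 2.67 (33% markup)
```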
I'm nervous that the EA Forum might be playing only a small role in x-risk and high-level prioritization work.

* Very little biorisk content here, perhaps because of info-hazards.
* Little technical AI safety work here, in part because that's more for LessWrong / the Alignment Forum.
* Little AI governance work here, for whatever reason.
* Not many innovative, big-picture longtermist prioritization projects happening at the moment, from what I understand.
* The cause of "EA community building" seems fairly stable; not much bold/controversial experimentation, from what I can tell.
* Fairly few updates / little discussion from grantmakers. OP is really the dominant one, and doesn't publish much, particularly about its grantmaking strategies and findings.

It's been feeling pretty quiet here recently, for my interests. I think some important threads are now happening in private Slacks or in-person conversations, or just not happening.
Post: OAI NDA drama. Do we know that Anthropic does not do similar things? I heard something that reassured me, but I no longer know what it was. Interested in what people have heard or know (feel free to DM me), or in general takes on the situation.


Recent discussion

Overview

Recently Dwarkesh Patel released an interview with François Chollet (hereafter Dwarkesh and François). I thought this was one of Dwarkesh's best recent podcasts, and one of the best discussions the AI community has had recently. Instead of subtweeting those...


> We know now that a) your results aren't technically SOTA

I think my results are probably SOTA based on more recent updates.

> It's not an LLM solution, it's an LLM + your scaffolding + program search, and I think that's importantly not the same thing.

I feel like this is a pretty strange way to draw the line about what counts as an "LLM solution".

Consider the following simplified dialogue as an example of why I don't think this is a natural place to draw the line:

Human skeptic: Humans don't exhibit real intelligence. You see, they'll never do something as...

Ryan Greenblatt
This seems wrong. It does use constants from the historical deep-learning field to provide guesses for parameters, and it assumes that compute is an important driver of AI progress. These are much weaker assumptions than you seem to be implying. Note also that this work is based on earlier work like bio anchors, which was done just as the current paradigm and scaling were being established. (It was published in the same year as Kaplan et al.)
Ryan Greenblatt
I think this is mostly a fallacy. (I feel like there should be a post explaining this somewhere.) Here is an alternative version of what you said, to indicate why I don't think this is a very interesting claim: sure, you can have a very smart quadriplegic who is very knowledgeable, but they won't do anything until you let them control some actuator. If your view is that "prediction won't result in intelligence", fair enough, though it's notable that the human brain seems to heavily utilize prediction objectives.

i. What is futarchy?

Futarchy is a proposed system of governance that combines prediction markets with democratic decision-making. Developed by Robin Hanson (who is at my university!), it aims to improve policy outcomes by harnessing the efficiency of markets. Under futarchy, citizens would vote on desired outcomes or metrics of success, but not on specific policies. Instead, prediction markets would be used to determine which policies are most likely to achieve the voted-upon goals. Traders in these markets would bet on the expected outcomes of different policy options, with the policies predicted to be most successful being automatically implemented. You vote on goals, but bet on beliefs.
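A minimal sketch of the decision rule this implies (illustrative only; `choose_policy` and the forecast numbers are my own inventions, not part of Hanson's actual market design):

```python
# Futarchy's core loop, highly simplified: run one conditional prediction
# market per candidate policy, asking "what will the voted-upon welfare
# metric be IF this policy is adopted?", then implement the policy whose
# market forecasts the best outcome. Markets conditioned on rejected
# policies are voided and bets refunded.

def choose_policy(conditional_forecasts: dict[str, float]) -> str:
    """conditional_forecasts maps policy name -> the market's expected
    value of the agreed metric, conditional on adopting that policy."""
    return max(conditional_forecasts, key=conditional_forecasts.get)

# Hypothetical example: citizens voted for median income as the metric.
forecasts = {"status quo": 41_200.0, "policy A": 43_750.0}
print(choose_policy(forecasts))  # -> "policy A"
```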

What I want to draw your attention to is an unintended but beneficial side effect of such a system: policy will reflect the utility functions of the people, even if those functions are non-linear. Indeed, if there...


What needs to be true for an estimate to be reasonable?

Pure Earth is a GiveWell grantee that works to reduce lead and mercury exposure. In an August 2023 post, they provided a "preliminary analysis" suggesting that their lead reduction program in Bangladesh "can avert an...

Seth Ariel Green
Sure, there is an intuitive plausibility to this. But how extraordinary must the political dysfunction be for no one within Bangladesh to be capable of solving this themselves through political agitation, without NGO support? If the DALY calculations are to be believed, the potential gains are enormous and comparatively cheap to capture. As an outsider to the situation, I am looking for context on why this hasn't happened yet. A good theory in the social sciences will either abstract away from the specifics of the situation or map a theory onto specific things that have happened (or haven't), rather than resting on the general observation that collective action problems cause some public goods to be under-provisioned.
JamesSnowden
You might find this article helpful for context: https://undark.org/2023/07/19/the-vice-of-spice-confronting-lead-tainted-turmeric/

FWIW, I'm sympathetic to your general point.

Thanks, I think this was featured on Marginal Revolution last year — definitely good background.


Hi! I’m Nico and I’m on the research team at Founders Pledge. We noticed that the way we compare current to future income benefits is in tension with how we compare income benefits across interventions. However, aligning these two comparisons—choosing the same function ...


Yup, that's an accurate summary of my beliefs (with the caveat that the scaling term is non-critical and can be replaced with a constant or whatever else you want; only η is essential). Put another way, η is a single preference parameter that determines the marginal utility of income, and that affects how we value both income and health. I think any other assumption leads to internal inconsistency, or doesn't represent utility maximization.
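For concreteness, a minimal sketch of the isoelastic-utility reading of this comment (my own illustration; the function names and numbers are assumptions, not Founders Pledge's code):

```python
import math

# Illustrative sketch of the single-parameter view under discussion:
# one preference parameter eta pins down the marginal utility of income,
# which then prices income gains and extra life-years in the same units.

def utility(c: float, eta: float) -> float:
    """Isoelastic utility of consumption c; eta = 1 is the log-utility case."""
    return math.log(c) if eta == 1 else c ** (1 - eta) / (1 - eta)

def marginal_utility(c: float, eta: float) -> float:
    """u'(c) = c**(-eta): the value of one extra dollar at income c."""
    return c ** (-eta)

# With eta = 1.5, a dollar at $1,000/yr is worth ~31.6x a dollar at
# $10,000/yr, which is what drives cross-intervention comparisons.
eta = 1.5
print(marginal_utility(1_000, eta) / marginal_utility(10_000, eta))  # ~31.6
```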

Does that sound right? If so, my view would be that valuing an extra life year according to c^(1-η)/(1-η) for some η is a funct...

The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: about a 0.06% increase in TFP growth annually. This stands in contrast to many predictions which forecast immense impacts...
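To put that figure on a decade scale, a quick back-of-the-envelope calculation (my own arithmetic, not from the paper):

```python
# Compounding a ~0.06%/yr TFP effect over ten years: the cumulative
# boost stays well under 1%, versus forecasts of transformative growth.
annual_tfp_boost = 0.0006
decade_effect = (1 + annual_tfp_boost) ** 10 - 1
print(f"{decade_effect:.2%}")  # ~0.60% higher TFP level after a decade
```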


I think EAs who consume economics research are accustomed to the challenges of interpreting applied microeconomic research: causal-inference problems and the like. But I don't think they are accustomed to interpreting structural models critically, which will become more of a problem as structural models of AI and economic growth become more common. The most common failure mode in interpreting structural research is failing to recognize model-concept mismatch. It looks something like this:

1. Write a paper about <concept> that requires a structural mo...

Why are seitan products so expensive?

I strongly believe in the price, taste, convenience hypothesis. If/when non-animal foods are cheaper and tastier, I expect the West to undergo a moral cascade where factory farming in a very short timespan will go from being commonplace...


Some helpful thoughts on this are here.

I think the evidence for the price-taste-convenience hypothesis is unfortunately fairly weak, for what it is worth. This analysis and this analysis are, I think, the best write-ups on this.

Bravo for an excellent length-to-utility ratio! This comment "punches above its weight."

My hot take is that vegan alternatives generally just don't get to a scale where the marginal unit cost is low.

Joseph Lemien posted a Quick Take

In discussions (both online and in-person) about applicant experience in hiring rounds, I've heard repeatedly that applicants want feedback. Giving in-depth feedback is costly (and risky), but here is an example I have received that strikes me as low-cost and low-risk. I've tweaked it a little to make it more of a template.

"Based on your [resume/application form/work sample], our team thinks you're a potential fit and would like to invite you to the next step of the application process: a [STEP]. You are being asked to complete [STEP] because you are currently in the top 20% of all applicants."

The phrasing "you are currently in the top 20% of all applicants" is nice. I like that. I haven't ever seen that before, but I think it is something that EA organizations (or hiring teams at any organization) could easily adapt and use in many hiring rounds. While you don't always know exactly what percentile a candidate falls into, you can give broad/vague information, such as being in the top X%. It is a way to give a small amount of feedback to candidates without requiring a large amount of time/effort and without taking on legal risk.
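As a toy illustration of how cheap this is to operationalize (a hypothetical helper I've made up, not anything from the post or any org's actual process):

```python
# Hypothetical sketch: generate the suggested low-risk feedback line from
# an applicant's rank, rounding the percentile up to a coarse bucket so
# the figure stays deliberately vague.

def feedback_line(rank: int, total_applicants: int, step: str) -> str:
    percentile = rank / total_applicants * 100
    bucket = next(b for b in (10, 20, 30, 50, 100) if percentile <= b)
    return (f"Our team thinks you're a potential fit and would like to "
            f"invite you to the next step of the application process: "
            f"a {step}. You are currently in the top {bucket}% of all "
            f"applicants.")

print(feedback_line(rank=14, total_applicants=90, step="work sample test"))
# rank 14 of 90 is the 16th percentile, so this prints "...top 20%..."
```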


I was chatting recently to someone who had difficulty knowing how to orient to x-risk work (despite being a respected professional working in that field). They expressed that they didn't find it motivating at a gut level in the same way they did with poverty or animal stuff...
